8 research outputs found

    Measure and integral : new foundations after one hundred years

    The present article aims to describe the main ideas and developments in the theory of measure and integral over the course of, and at the end of, the first century of its existence.

    Dual representations for general multiple stopping problems

    In this paper, we study the dual representation for generalized multiple stopping problems, and hence the pricing problem of general multiple exercise options. We derive a dual representation that allows for cashflows which are subject to volume constraints modeled by integer-valued adapted processes and to refraction periods modeled by stopping times. As such, this extends the works by Schoenmakers (2010), Bender (2011a), Bender (2011b), Aleksandrov and Hambly (2010), and Meinshausen and Hambly (2004) on multiple exercise options, which take into consideration either a refraction period or volume constraints, but not both simultaneously. We also allow more flexible cashflow structures than the additive structure in the above references; for example, some exponential utility problems are covered by our setting. We supplement the theoretical results with an explicit Monte Carlo algorithm for constructing confidence intervals for the price of multiple exercise options and exemplify it by a numerical study on the pricing of a swing option in an electricity market. Comment: This is an updated version of WIAS preprint 1665, 23 November 201
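
    For orientation, the classical single-exercise case already illustrates the kind of dual representation the paper generalizes. The display below is only that well-known baseline (Rogers 2002, Haugh and Kogan 2004), not the paper's result; the notation ($Z$ for the discounted cashflow, $M$ for a martingale started at zero) is ours:

    $$\sup_{\tau \le T} \mathbb{E}[Z_\tau] \;=\; \inf_{M:\,M_0=0} \mathbb{E}\Big[\max_{0 \le t \le T} \big(Z_t - M_t\big)\Big],$$

    where the supremum ranges over stopping times and the infimum over martingales. The multiple stopping version replaces the single stopping time by a family of exercise times subject to the volume and refraction constraints described above.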

    Wigner chaos and the fourth moment

    We prove that a normalized sequence of multiple Wigner integrals (in a fixed order of free Wigner chaos) converges in law to the standard semicircular distribution if and only if the corresponding sequence of fourth moments converges to 2, the fourth moment of the semicircular law. This extends, to the free probabilistic setting, some recent results by Nualart and Peccati on characterizations of central limit theorems in a fixed order of Gaussian Wiener chaos. Our proof is combinatorial, analyzing the relevant noncrossing partitions that control the moments of the integrals. We can also use these techniques to distinguish the first order of chaos from all others in terms of distributions; we then use tools from the free Malliavin calculus to give quantitative bounds on a distance between different orders of chaos. When applied to highly symmetric kernels, our results yield a new transfer principle, connecting central limit theorems in free Wigner chaos to those in Gaussian Wiener chaos. We use this to prove a new free version of an important classical theorem, the Breuer-Major theorem. Comment: Published at http://dx.doi.org/10.1214/11-AOP657 in the Annals of Probability (http://www.imstat.org/aop/) by the Institute of Mathematical Statistics (http://www.imstat.org)
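
    In symbols, and using the abstract's normalization, the main equivalence can be restated as follows (the notation is assumed here: $I_q(f_n)$ for a Wigner integral of fixed order $q$ and $\varphi$ for the trace, i.e. the free expectation):

    $$F_n = I_q(f_n), \quad \varphi(F_n^2) \to 1: \qquad F_n \xrightarrow{\ \mathrm{law}\ } \mathcal{S}(0,1) \;\iff\; \varphi(F_n^4) \to 2,$$

    where $\mathcal{S}(0,1)$ denotes the standard semicircular distribution, whose fourth moment is exactly $2$.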

    Computational complexity of graph polynomials

    The thesis provides hardness and algorithmic results for graph polynomials. We observe VNP-completeness of the interlace polynomial, and we prove VNP-completeness of almost all q-restrictions of Z(G; q; x), the multivariate Tutte polynomial. Using graph transformations, we obtain point-to-point reductions for graph polynomials. We develop two general methods: vertex/edge cloning and, more generally, uniform local graph transformations. These methods unify known and new hardness-of-evaluation results for graph polynomials. We apply both methods to several examples. We show that, almost everywhere, it is #P-hard to evaluate the two-variable interlace polynomial and the (normal as well as extended) bivariate chromatic polynomial. "Almost everywhere" means that the dimension of the set of exceptional points is strictly less than the dimension of the domain of the graph polynomial. We also give an inapproximability result for the evaluation of the independent set polynomial. Providing a new family of reductions for the interlace polynomial that increases the instance size only polylogarithmically, we obtain an exp(Ω(n / log³ n)) time lower bound for the evaluation of the independent set polynomial under a counting version of the exponential time hypothesis. We observe that the extended bivariate chromatic polynomial can be computed in vertex-exponential time. We devise a means to compute the interlace polynomial using tree decompositions. This enables a parameterized algorithm that evaluates the interlace polynomial in time linear in the size of the graph and single-exponential in the treewidth. We give several versions of the algorithm, including a parallel one and a faster way to compute the interlace polynomial of any graph. Finally, we propose two faster algorithms to compute or evaluate the interlace polynomial in special cases.
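
    For reference, the multivariate Tutte polynomial mentioned above is, in the standard convention (Sokal), defined by

    $$Z(G; q, \mathbf{x}) \;=\; \sum_{A \subseteq E(G)} q^{k(A)} \prod_{e \in A} x_e,$$

    where $k(A)$ is the number of connected components of the spanning subgraph $(V(G), A)$ and $\mathbf{x} = (x_e)_{e \in E(G)}$ has one indeterminate per edge; a q-restriction fixes the variable q to a concrete value. The notation here follows that common convention and may differ in inessential details from the thesis.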

    A uniform approach to the complexity and analysis of succinct systems

    This thesis provides a unifying view on the succinctness of systems: the capability of a modeling formalism to describe the behavior of a system of exponential size using a polynomial syntax. The key theoretical contribution is the introduction of sequential circuit machines as a new universal computation model that focuses on succinctness as the central aspect. The thesis demonstrates that many well-known modeling formalisms, such as communicating state machines, linear-time temporal logic, or timed automata, exhibit an immediate connection to this machine model. Once a (syntactic) connection is established, many complexity bounds for structurally restricted sequential circuit machines can be transferred to a given formalism in a uniform manner. As a consequence, besides a far-reaching unification of independent lines of research, we are also able to provide matching complexity bounds for various analysis problems whose complexities were not known so far. For example, we establish matching lower and upper bounds for the small witness problem and for several variants of the bounded synthesis problem for timed automata, a particularly important succinct modeling formalism. Also for timed automata, our complexity-theoretic analysis leads to the identification of tractable fragments of the timed synthesis problem under partial observability. Specifically, we identify timed controller synthesis based on discrete or template-based controllers as equivalent to model checking. Based on this discovery, we develop a new model checking-based algorithm to efficiently find feasible template instantiations. From a more practical perspective, this thesis also studies the preservation of succinctness in analysis algorithms using symbolic data structures. While efficient techniques exist for specific forms of succinctness considered in isolation, we present a general approach based on abstraction refinement to combine off-the-shelf symbolic data structures. In particular, for handling the combination of concurrency and quantitative timing behavior in networks of timed automata, we report on the tool Synthia, which combines binary decision diagrams with difference bound matrices. In a comparison with the timed model checker Uppaal and the timed game solver Tiga, running on standard benchmarks from the timed model checking and synthesis domains respectively, the experimental results clearly demonstrate the effectiveness of our new approach.
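
    Difference bound matrices, one half of the symbolic combination used in Synthia, are a standard data structure and easy to sketch. The following is a minimal illustration of the general technique (clock zones as weighted constraint matrices, canonicalized by Floyd-Warshall), not the thesis's implementation; all names and the example zone are ours:

        # Difference bound matrix (DBM): entry d[i][j] encodes the clock
        # constraint x_i - x_j <= d[i][j]; index 0 is the reference "clock"
        # that is constantly 0.
        INF = float("inf")

        def canonicalize(d):
            """Tighten all bounds with Floyd-Warshall; the represented zone
            is non-empty iff no diagonal entry becomes negative."""
            n = len(d)
            for k in range(n):
                for i in range(n):
                    for j in range(n):
                        if d[i][k] + d[k][j] < d[i][j]:
                            d[i][j] = d[i][k] + d[k][j]
            return all(d[i][i] >= 0 for i in range(n))

        # Zone over two clocks x1, x2: x1 <= 3, x2 >= 2, x2 - x1 <= 1.
        dbm = [[0,   0,   -2],    # 0 - x1 <= 0,  0 - x2 <= -2
               [3,   0,   INF],   # x1 - 0 <= 3
               [INF, 1,   0]]     # x2 - x1 <= 1
        print(canonicalize(dbm))  # True: the zone is satisfiable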

    Fubini-Tonelli Theorems on the basis of inner and outer premeasures


    Bridging the Gap Between Underspecification Formalisms: Minimal Recursion Semantics as Dominance Constraints

    Minimal Recursion Semantics (MRS) is the standard formalism used in large-scale HPSG grammars to model underspecified semantics. We present the first provably efficient algorithm to enumerate the readings of MRS structures. It is obtained by translating MRS into normal dominance constraints, for which efficient algorithms exist.
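
    As a generic illustration of the scope underspecification both formalisms describe (a textbook example, not taken from the paper): the sentence "Every student reads a book" has two readings,

    $$\forall x\,\big(\mathrm{student}(x) \rightarrow \exists y\,(\mathrm{book}(y) \wedge \mathrm{read}(x,y))\big) \quad\text{and}\quad \exists y\,\big(\mathrm{book}(y) \wedge \forall x\,(\mathrm{student}(x) \rightarrow \mathrm{read}(x,y))\big),$$

    and an underspecified description leaves the relative scope of the two quantifier fragments open; enumerating the solved forms of the corresponding dominance constraint enumerates exactly these readings.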

    Manawi: Using multi-word expressions and named entities to improve machine translation

    We describe the Manawi (mAnEv) system submitted to the 2014 WMT translation shared task. We participated in the English-Hindi (EN-HI) and Hindi-English (HI-EN) language pairs and achieved 0.792 for the Translation Error Rate (TER) score for EN-HI, the lowest among the competing systems. Our main innovations are (i) the use of outputs from NLP tools, viz. a bilingual multi-word expression extractor and a named-entity recognizer, to improve SMT quality and (ii) the introduction of a novel filter method based on sentence-alignment features. The Manawi system showed the potential of improving translation quality by incorporating multiple NLP tools within the MT pipeline.
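
    To make the second point concrete, here is a minimal, generic sketch of filtering a parallel corpus on simple sentence-alignment features (length ratio and agreement of tokens that are normally copied verbatim, such as numbers). It only illustrates the idea of feature-based filtering; it is not the Manawi filter, and the thresholds and feature choices are ours:

        def keep_pair(src, tgt, max_len_ratio=2.0):
            """Return True if the sentence pair looks like a plausible alignment."""
            src_toks, tgt_toks = src.split(), tgt.split()
            if not src_toks or not tgt_toks:
                return False
            # feature 1: token-length ratio between the two sides
            ratio = max(len(src_toks), len(tgt_toks)) / min(len(src_toks), len(tgt_toks))
            if ratio > max_len_ratio:
                return False
            # feature 2: numbers should survive translation unchanged
            return {t for t in src_toks if t.isdigit()} == {t for t in tgt_toks if t.isdigit()}

        corpus = [("the meeting starts at 10", "बैठक 10 बजे शुरू होती है"),
                  ("hello", "यह एक बहुत लंबा और असंबंधित वाक्य है")]
        filtered = [pair for pair in corpus if keep_pair(*pair)]  # keeps only the first pair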